CUDA LAB - Generative Adversarial Networks

Required Libraries

Observations:

1) Latent spaces of size 10, 100, 150, 200, 1000, and 3000 were explored; however, they did not seem to change the outcome much. The model just took longer at the beginning to start giving shape and colour to the generated images as the latent size was increased.

2) A latent space of size 100 was experimented with the most, varying the Conv2d and ConvTranspose2d layers. However, the most influential parameter was the number of feature maps, i.e. in_channels and out_channels.

3) Below you will see the "ngf" and "ndf" parameters controlling the feature maps. When they were set as low as 16 or 24, the generated images were not as nice as when they were increased to 32 or 64, which seemed obvious since there are then more parameters to learn; however, the model also became much slower.

4) One of the models ran for 500+ epochs on Google Colab and started to generate much nicer images, but Colab disconnected at some point, so we only have some downloaded images to show for that particular model. The rest of the models have their images and logs saved.

5) The models were quick to learn different colours and shapes; however, even after numerous epochs, the generated images mixed features from different labels, e.g. animals blended with vehicles. One model started to generate images that looked like an airplane, a car, and especially a frog, plus some notion of water surrounding an object, which we hope is a ship :)
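The observations above revolve around the latent size (nz) and the "ngf"/"ndf" feature-map multipliers of a DCGAN. A minimal sketch of how these hyperparameters shape the networks, loosely following the PyTorch DCGAN tutorial referenced below (the exact layer configuration used in our experiments may differ):

```python
import torch
import torch.nn as nn

nz, ngf, ndf, nc = 100, 64, 64, 3  # latent size, G/D feature maps, colour channels

class Generator(nn.Module):
    """Maps a (nz x 1 x 1) latent vector up to a 3 x 64 x 64 image."""
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),   # -> (ngf*8) x 4 x 4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # -> 8 x 8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # -> 16 x 16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # -> 32 x 32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),           # -> 64 x 64
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.main(z)

class Discriminator(nn.Module):
    """Downsamples a 3 x 64 x 64 image to a single real/fake probability."""
    def __init__(self):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.main(x).view(-1)
```

Lowering ngf/ndf to 16 or 24 shrinks every intermediate feature map proportionally, which matches the observation that image quality dropped at those settings.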

A sample to test what the TensorBoard output looks like.

Now making directories for the GAN logs (both images and losses on TensorBoard).
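A sketch of the directory setup, with hypothetical directory names (the notebook's actual paths may differ):

```python
import os

log_root = "gan_logs"                           # hypothetical root directory
img_dir = os.path.join(log_root, "images")      # saved sample-image grids
tb_dir = os.path.join(log_root, "tensorboard")  # TensorBoard event files
for d in (img_dir, tb_dir):
    os.makedirs(d, exist_ok=True)  # exist_ok makes re-running the cell safe

# A torch.utils.tensorboard.SummaryWriter pointed at tb_dir then receives
# both the loss scalars and the generated-image grids during training.
```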

CIFAR10 Dataset Used

Parameters
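The parameters in question are presumably along these lines; the specific values below are assumptions reconstructed from the observations above and the DCGAN tutorial defaults, not the notebook's exact settings:

```python
nz = 100         # latent vector size (10-3000 were tried; 100 was used most)
ngf = 64         # generator feature maps (16/24 gave worse images than 32/64)
ndf = 64         # discriminator feature maps
nc = 3           # colour channels in CIFAR10
lr = 0.0002      # Adam learning rate (DCGAN tutorial default)
beta1 = 0.5      # Adam beta1 (DCGAN tutorial default)
num_epochs = 25  # this notebook's sample run; longer runs are linked below
```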

Instead of the helper function from session 7, the helper function from https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html was used.
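Assuming the helper meant is that tutorial's weight-initialisation function, it draws conv weights from N(0, 0.02) and batch-norm scales from N(1, 0.02), matching the original DCGAN paper:

```python
import torch
import torch.nn as nn

def weights_init(m):
    """DCGAN-tutorial initialisation, applied module-by-module via model.apply()."""
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)   # conv layers: N(0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)   # batch-norm scale: N(1, 0.02)
        nn.init.constant_(m.bias.data, 0)           # batch-norm bias: 0
```

Usage is `netG.apply(weights_init)` and `netD.apply(weights_init)` right after constructing the networks.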

NOTE: The following images in this notebook are sample images from a model trained for just 25 epochs. For images from the 100-, 200-, and 500-epoch runs, please check this Google Drive link: https://drive.google.com/drive/folders/1GgihpCdH7ty9ZPsEJzN9ZxTJpB-7ax_y?usp=sharing

Some generated images and an animated visualisation of how the images change over training.
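One way to build such an animation from periodically saved sample grids (a sketch; `img_list` and the GIF path are assumed names, and the pillow writer is used so ffmpeg is not required):

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np

def animate_samples(img_list, path="gan_progress.gif"):
    """img_list: H x W x 3 arrays (e.g. make_grid outputs), one per logging step."""
    fig = plt.figure(figsize=(6, 6))
    plt.axis("off")
    frames = [[plt.imshow(img, animated=True)] for img in img_list]
    ani = animation.ArtistAnimation(fig, frames, interval=500, blit=True)
    ani.save(path, writer="pillow")
    plt.close(fig)
```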

Plotting the losses over the iteration steps.

Different parameter settings gave similar-looking loss curves, with the generator loss typically decreasing at the beginning and then starting to increase again.
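The plot itself can be produced along these lines, given per-iteration loss lists collected during training (`g_losses`/`d_losses` are assumed names):

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt

def plot_losses(g_losses, d_losses, path="gan_losses.png"):
    """Plot generator and discriminator loss against training iterations."""
    plt.figure(figsize=(10, 5))
    plt.title("Generator and Discriminator Loss During Training")
    plt.plot(g_losses, label="G")
    plt.plot(d_losses, label="D")
    plt.xlabel("iterations")
    plt.ylabel("loss")
    plt.legend()
    plt.savefig(path)
    plt.close()
```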

Some more images showing the difference between real and fake images.

Counting the model parameters.
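Counting parameters for a PyTorch model is a one-liner over `model.parameters()`; a small sketch (the split into total vs trainable is an assumption about what the notebook reports):

```python
import torch.nn as nn

def count_parameters(model: nn.Module):
    """Return (total, trainable) parameter counts for a torch.nn.Module."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable
```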

END of File